A Case Study With MNIST

In this case study, we train the model with $16 \times 16$ MNIST images and evaluate its performance.

Import modules
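A minimal sketch of the classical imports used throughout this case study; the original notebook also imports TensorFlow (for data loading) and the quantum-circuit library used to build the qGAN, which are omitted here.

```python
# Classical modules assumed by the rest of this case study (a sketch;
# the TensorFlow and quantum-circuit imports are not reproduced here).
import numpy as np
import matplotlib.pyplot as plt
from multiprocessing import Pool
from scipy.ndimage import zoom  # used later for cubic downscaling
```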

Define hyperparameters
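The sizes below follow directly from the setup ($16 \times 16$ images on $8$ qubits, $36$ sampled images); the iteration count and learning rate are hypothetical placeholders, since the original values are not reproduced here.

```python
# Hyperparameters.  N_QUBITS, IMAGE_SIZE, and N_SAMPLES follow from the
# text; N_ITERATIONS and LEARNING_RATE are placeholder assumptions.
N_QUBITS = 8          # 2**8 = 256 amplitudes, one per pixel of a 16x16 image
IMAGE_SIZE = 16       # side length after downscaling
N_SAMPLES = 36        # number of MNIST images sampled for training
N_ITERATIONS = 200    # training iterations (assumption)
LEARNING_RATE = 0.05  # optimizer step size (assumption)

# Sanity check: one amplitude per pixel.
assert 2 ** N_QUBITS == IMAGE_SIZE * IMAGE_SIZE
```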

Load MNIST dataset

We use TensorFlow to directly load MNIST images. Each image has dimensions $28 \times 28$.

To fit the images onto $8$ qubits, we downscale them to $16 \times 16$ with cubic interpolation.
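One way to perform the cubic downscaling, sketched with SciPy's `zoom` (an order-3 spline; `cv2.resize` with `INTER_CUBIC` would be an alternative). The synthetic input stands in for an MNIST image loaded via TensorFlow.

```python
import numpy as np
from scipy.ndimage import zoom

def downscale(img, out_side=16):
    """Downscale a square image to out_side x out_side using cubic
    (order-3 spline) interpolation."""
    factor = out_side / img.shape[0]
    return zoom(img.astype(float), factor, order=3)

# Example on a synthetic 28x28 array standing in for an MNIST image:
img = np.arange(28 * 28, dtype=float).reshape(28, 28)
small = downscale(img)
```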

This is a sample MNIST image depicting the number $5$.

qGAN training

Define a helper function that enables parallelization in the training process.

Generate $36$ random indices to sample the dataset.
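Sampling without replacement can be done with NumPy's `Generator.choice`; the array name and the fixed seed are assumptions for illustration.

```python
import numpy as np

# `images` stands in for the downscaled training set (hypothetical name).
images = np.zeros((60000, 16, 16))

rng = np.random.default_rng(0)  # fixed seed, for reproducibility (assumption)
indices = rng.choice(len(images), size=36, replace=False)  # 36 distinct indices
```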

Train the model in parallel. This will take a while, so grab a cup of coffee in the meantime!

Evaluation

Define the cross-entropy between two discrete probability distributions.
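The cross-entropy between discrete distributions $p$ and $q$ is $H(p, q) = -\sum_i p_i \log q_i$; a direct NumPy implementation might look like this (the `eps` guard against $\log 0$ is an implementation choice).

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy H(p, q) = -sum_i p_i * log(q_i) between two discrete
    probability distributions; eps guards against log(0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(-np.sum(p * np.log(q + eps)))
```

For identical uniform distributions over two outcomes, this reduces to $\log 2$.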

Compute and plot the evolution of cross-entropy for each image, as well as the average across all images.
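A sketch of the plotting step. The loss histories here are synthetic placeholders of shape `(n_images, n_iterations)`; in the notebook they would come from the parallel training run.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so the script runs without a display
import matplotlib.pyplot as plt

# Synthetic per-image cross-entropy histories (placeholder data):
# one decaying curve per image, shape (n_images, n_iterations).
n_images, n_iters = 36, 200
curve = np.exp(-np.linspace(0, 4, n_iters)) + 0.015
losses = np.tile(curve, (n_images, 1))

mean_loss = losses.mean(axis=0)  # average across all images

for history in losses:
    plt.plot(history, color="gray", alpha=0.2)   # one faint line per image
plt.plot(mean_loss, color="black", label="mean") # average across images
plt.xlabel("iteration")
plt.ylabel("cross-entropy")
plt.legend()
plt.savefig("cross_entropy.png")
```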

Observe that the mean cross-entropy decreases rapidly as the number of iterations increases. The value stabilizes at around $1.5 \times 10^{-2}$.

Results

The generated images closely resemble the originals. While some generation artifacts can be observed in the low-probability background, the high-probability foreground is clearly distinguishable.

Generated images

Original images